

A Flexible Generative Framework for Graph-based Semi-supervised Learning

Jiaqi Ma, Weijing Tang, Ji Zhu, Qiaozhu Mei

Neural Information Processing Systems

We consider a family of problems that are concerned with making predictions for the majority of unlabeled, graph-structured data samples based on a small proportion of labeled samples. Relational information among the data samples, often encoded in the graph/network structure, is shown to be helpful for these semi-supervised learning tasks.


5975754c7650dfee0682e06e1fec0522-Supplemental-Conference.pdf

Neural Information Processing Systems

Both models consist of 2 layers and the hidden dimension is fixed to 64. We add a weight decay of 5e-4 for Cora, Citeseer, and Pubmed, and 0 for the rest. The optimizer configuration and the training schedule are the same as in Section A.2. The KDE-ECE is estimated with the kernel term K_h(c − ĉ_i) (Eq. 7), where i ∈ V denotes the evaluated node and h is the bandwidth of the kernel function. The class-wise ECEs are summarized in Table 3, and the KDE-ECEs are collected in Table 4. We adopt a heuristic which proportionally rescales the non-top-1 output probabilities so that the calibrated probabilistic output sums to one. While the ECEs of CaGCN in its original paper are promising [23], we observe that the ECEs of CaGCN are often unstable and sometimes even worse than those of the uncalibrated model in our experiments.
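The excerpt above describes two concrete procedures: a kernel-density estimate of the expected calibration error and a heuristic that rescales the non-top-1 probabilities after calibrating the top-1 confidence. The sketch below illustrates both, assuming a Gaussian kernel and evaluation at the sample confidences, since the excerpt fixes neither; `kde_ece` and `rescale_probs` are illustrative names, not the paper's code.

```python
import numpy as np

def kde_ece(conf, correct, h=0.05):
    """KDE-based ECE sketch: kernel-smooth accuracy as a function of
    confidence with K_h(c - c_i), then average the |accuracy - confidence|
    gap over the evaluated nodes. The Gaussian kernel is an assumption."""
    conf = np.asarray(conf, dtype=float)        # (N,) top-1 confidences
    correct = np.asarray(correct, dtype=float)  # (N,) 1 if prediction correct
    diff = conf[:, None] - conf[None, :]
    k = np.exp(-0.5 * (diff / h) ** 2)          # K_h(c_j - c_i)
    acc = (k @ correct) / k.sum(axis=1)         # smoothed accuracy at each c_j
    return float(np.mean(np.abs(acc - conf)))

def rescale_probs(p, calibrated_top1):
    """Proportionally rescale the non-top-1 probabilities so the calibrated
    output still sums to one, per the heuristic described in the excerpt."""
    p = np.asarray(p, dtype=float)
    out = p.copy()
    i = int(np.argmax(p))
    rest = p.sum() - p[i]
    out[i] = calibrated_top1
    if rest > 0:  # degenerate one-hot inputs are left unscaled
        out[np.arange(p.size) != i] *= (1.0 - calibrated_top1) / rest
    return out
```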


HGEN: Heterogeneous Graph Ensemble Networks

Shen, Jiajun, Jin, Yufei, He, Yi, Zhu, Xingquan

arXiv.org Artificial Intelligence

This paper presents HGEN, which pioneers ensemble learning for heterogeneous graphs. We argue that the heterogeneity in node types, nodal features, and local neighborhood topology poses significant challenges for ensemble learning, particularly in accommodating diverse graph learners. Our HGEN framework ensembles multiple learners through a meta-path and transformation-based optimization pipeline to uplift classification accuracy. Specifically, HGEN uses meta-paths combined with random dropping to create Allele Graph Neural Networks (GNNs), whereby the base graph learners are trained and aligned for later ensembling. To ensure effective ensemble learning, HGEN presents two key components: 1) a residual-attention mechanism to calibrate allele GNNs of different meta-paths, thereby enforcing node embeddings to focus on more informative graphs to improve base learner accuracy, and 2) a correlation-regularization term to enlarge the disparity among embedding matrices generated from different meta-paths, thereby enriching base learner diversity. We analyze the convergence of HGEN and show that it attains a higher regularization magnitude than simple voting. Experiments on five heterogeneous networks validate that HGEN consistently outperforms its state-of-the-art competitors by a substantial margin.
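The abstract names a correlation-regularization term that enlarges the disparity among the meta-path embedding matrices but does not give its formula; the sketch below is one plausible instantiation, penalizing pairwise cosine correlation between centered embedding matrices, and should not be read as HGEN's actual term.

```python
import torch

def correlation_regularizer(embeddings):
    """Hedged sketch of a correlation-style diversity penalty: minimizing
    the squared cosine similarity between centered, flattened embedding
    matrices pushes the per-meta-path embeddings apart.
    embeddings: list of (N, d) matrices, one per meta-path."""
    loss, pairs = 0.0, 0
    for a in range(len(embeddings)):
        for b in range(a + 1, len(embeddings)):
            za = embeddings[a] - embeddings[a].mean(dim=0, keepdim=True)
            zb = embeddings[b] - embeddings[b].mean(dim=0, keepdim=True)
            num = (za * zb).sum()               # inner product of matrices
            den = za.norm() * zb.norm() + 1e-8  # Frobenius norms
            loss = loss + (num / den) ** 2
            pairs += 1
    return loss / max(pairs, 1)
```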


GSTBench: A Benchmark Study on the Transferability of Graph Self-Supervised Learning

Song, Yu, Hua, Zhigang, Xie, Yan, Liu, Jingzhe, Long, Bo, Liu, Hui

arXiv.org Artificial Intelligence

Self-supervised learning (SSL) has shown great promise in graph representation learning. However, most existing graph SSL methods are developed and evaluated under a single-dataset setting, leaving their cross-dataset transferability largely unexplored and limiting their ability to leverage knowledge transfer and large-scale pretraining, factors that are critical for developing generalized intelligence beyond fitting training data. To address this gap and advance foundation model research for graphs, we present GSTBench, the first systematic benchmark for evaluating the transferability of graph SSL methods. We conduct large-scale pretraining on ogbn-papers100M and evaluate five representative SSL methods across a diverse set of target graphs. Our standardized experimental setup decouples confounding factors such as model architecture, dataset characteristics, and adaptation protocols, enabling rigorous comparisons focused solely on pretraining objectives. Surprisingly, we observe that most graph SSL methods struggle to generalize, with some performing worse than random initialization. In contrast, GraphMAE, a masked autoencoder approach, consistently improves transfer performance. We analyze the underlying factors that drive these differences and offer insights to guide future research on transferable graph SSL, laying a solid foundation for the "pretrain-then-transfer" paradigm in graph learning. Our code is available at https://github.com/SongYYYY/GSTBench.
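Since GraphMAE is singled out as the one objective that transfers, a minimal sketch of the masked-autoencoder idea it represents may help: mask a subset of node features, encode with a GNN, decode, and score reconstruction on the masked nodes with a scaled cosine error. The encoder/decoder interfaces and hyperparameters here are assumptions, not GSTBench's code.

```python
import torch
import torch.nn.functional as F

def masked_feature_loss(x, edge_index, encoder, decoder, mask_token,
                        mask_rate=0.5, gamma=2.0):
    """GraphMAE-style masked-autoencoding sketch. encoder/decoder are
    assumed to be GNN modules with a (features, edge_index) interface;
    mask_token is a learnable (d,) vector standing in for [MASK]."""
    n = x.size(0)
    mask = torch.rand(n, device=x.device) < mask_rate
    x_in = x.clone()
    x_in[mask] = mask_token                  # hide the masked nodes' features
    z = encoder(x_in, edge_index)
    x_hat = decoder(z, edge_index)
    # Scaled cosine error, computed on masked nodes only.
    cos = F.cosine_similarity(x_hat[mask], x[mask], dim=-1)
    return ((1 - cos).clamp(min=1e-6) ** gamma).mean()
```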


Graph Transductive Defense: a Two-Stage Defense for Graph Membership Inference Attacks

Niu, Peizhi, Pan, Chao, Chen, Siheng, Milenkovic, Olgica

arXiv.org Artificial Intelligence

Graph neural networks (GNNs) have become instrumental in diverse real-world applications, offering powerful graph learning capabilities for tasks such as social networks and medical data analysis. Despite their successes, GNNs are vulnerable to adversarial attacks, including membership inference attacks (MIA), which threaten privacy by identifying whether a record was part of the model's training data. While existing research has explored MIA in GNNs under graph inductive learning settings, the more common and challenging graph transductive learning setting remains understudied in this context. This paper addresses this gap and proposes an effective two-stage defense, Graph Transductive Defense (GTD), tailored to graph transductive learning characteristics. The gist of our approach is a combination of a train-test alternate training schedule and flattening strategy, which successfully reduces the difference between the training and testing loss distributions. Extensive empirical results demonstrate the superior performance of our method (a decrease in attack AUROC by 9.42% and an increase in utility performance by 18.08% on average compared to LBP), highlighting its potential for seamless integration into various classification models with minimal overhead.
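The abstract's "train-test alternate training schedule" suggests steps like the sketch below: in the transductive setting the test nodes' features and edges are available at training time, so alternating supervised updates on labeled nodes with pseudo-label updates on test nodes pulls the two loss distributions together. This is a hedged illustration of the alternation idea only; the paper's actual schedule and its flattening strategy are not reproduced here.

```python
import torch
import torch.nn.functional as F

def alternate_training_step(model, optimizer, data, train_mask, test_mask, epoch):
    """One step of an assumed train-test alternation (PyTorch Geometric-style
    data object). Even epochs fit the labeled nodes; odd epochs fit current
    pseudo-labels on test nodes, narrowing the train/test loss gap MIAs use."""
    model.train()
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    if epoch % 2 == 0:
        # Standard supervised step on the labeled training nodes.
        loss = F.cross_entropy(out[train_mask], data.y[train_mask])
    else:
        # Self-training step on test nodes with detached pseudo-labels.
        pseudo = out[test_mask].argmax(dim=-1).detach()
        loss = F.cross_entropy(out[test_mask], pseudo)
    loss.backward()
    optimizer.step()
    return loss.item()
```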


Improving the interpretability of GNN predictions through conformal-based graph sparsification

Sanchez-Martin, Pablo, Khan, Kinaan Aamir, Valera, Isabel

arXiv.org Machine Learning

Graph Neural Networks (GNNs) have achieved state-of-the-art performance in solving graph classification tasks. However, most GNN architectures aggregate information from all nodes and edges in a graph, regardless of their relevance to the task at hand, thus hindering the interpretability of their predictions. In contrast to prior work, in this paper we propose a GNN training approach that jointly i) finds the most predictive subgraph by removing edges and/or nodes, without making assumptions about the subgraph structure, while ii) optimizing the performance of the graph classification task. To that end, we rely on reinforcement learning to solve the resulting bi-level optimization with a reward function based on conformal predictions to account for the current in-training uncertainty of the classifier. Our empirical results on nine different graph classification datasets show that our method competes in performance with baselines while relying on significantly sparser subgraphs, leading to more interpretable GNN-based predictions.
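A reward "based on conformal predictions" plausibly scores a candidate subgraph by whether the classifier's conformal prediction set covers the true label and by how large that set is, with set size tracking the in-training uncertainty the abstract mentions. The sketch below shows a standard split-conformal threshold and one such reward; the paper's exact reward is not specified here, and `size_penalty` is an illustrative knob.

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split-conformal calibration: the nonconformity score of a calibration
    point is 1 - p(true class); returns the adjusted (1 - alpha) quantile."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q, 1.0), method="higher")

def sparsification_reward(probs, label, q_hat, size_penalty=0.1):
    """Assumed reward: +1 if the prediction set built from the sparsified
    graph's output still covers the true label, minus a set-size penalty."""
    pred_set = np.flatnonzero(probs >= 1.0 - q_hat)
    covered = float(label in pred_set)
    return covered - size_penalty * len(pred_set)
```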


Unlink to Unlearn: Simplifying Edge Unlearning in GNNs

Tan, Jiajun, Sun, Fei, Qiu, Ruichen, Su, Du, Shen, Huawei

arXiv.org Artificial Intelligence

As concerns over data privacy intensify, unlearning in Graph Neural Networks (GNNs) has emerged as a prominent research frontier in academia. This concept is pivotal in enforcing the "right to be forgotten", which entails the selective removal of specific data from trained GNNs upon user request. Our research focuses on edge unlearning, a process of particular relevance to real-world applications. Current state-of-the-art approaches like GNNDelete can eliminate the influence of specific edges yet suffer from "over-forgetting", meaning the unlearning process inadvertently removes more information than needed, leading to a significant performance decline on the remaining edges. Our analysis identifies the loss functions of GNNDelete as the primary source of over-forgetting and also suggests that loss functions may be redundant for effective edge unlearning. Building on these insights, we simplify GNNDelete to develop Unlink to Unlearn (UtU), a novel method that facilitates unlearning exclusively by unlinking the forget edges from the graph structure. Our extensive experiments demonstrate that UtU delivers privacy protection on par with that of a retrained model while preserving high accuracy in downstream tasks, upholding over 97.3% of the retrained model's privacy protection capabilities and 99.8% of its link prediction accuracy. Meanwhile, UtU requires only constant computational demands, underscoring its advantage as a highly lightweight and practical edge unlearning solution.
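UtU's mechanism as described is unusually simple to state in code: unlearning reduces to deleting the forget edges from the graph structure, with no auxiliary losses or retraining. A minimal sketch over a (2, E) edge-index tensor, assuming an undirected graph stored with symmetric edge pairs:

```python
import torch

def unlink_to_unlearn(edge_index, forget_edges):
    """Drop the requested forget edges (and their reverse copies) from a
    (2, E) edge_index tensor; the model itself is left untouched."""
    forget = {(int(u), int(v)) for u, v in forget_edges}
    forget |= {(v, u) for u, v in forget}   # undirected: drop both directions
    keep = [i for i in range(edge_index.size(1))
            if (int(edge_index[0, i]), int(edge_index[1, i])) not in forget]
    return edge_index[:, keep]
```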


Graph Edits for Counterfactual Explanations: A Unified GNN Approach

Chaidos, Nikolaos, Dimitriou, Angeliki, Lymperaiou, Maria, Stamou, Giorgos

arXiv.org Artificial Intelligence

Counterfactuals have been established as a popular explainability technique which leverages a set of minimal edits to alter the prediction of a classifier. When considering conceptual counterfactuals, the edits requested should correspond to salient concepts present in the input data. At the same time, conceptual distances are defined by knowledge graphs, ensuring the optimality of conceptual edits. In this work, we extend previous endeavors on conceptual counterfactuals by introducing graph edits as counterfactual explanations: if we represent input data as graphs, what is the shortest graph edit path that results in an alternative classification label as provided by a black-box classifier?
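The question posed above can be stated as a search problem: among graphs the black-box classifier labels differently, find the one reachable by the shortest edit path. The brute-force sketch below states that objective using networkx's exact (and exponential) graph edit distance over a candidate pool; the paper's contribution is a GNN that approximates this search, which the sketch does not attempt. `classifier` and `pool` are assumed interfaces.

```python
import networkx as nx

def nearest_counterfactual(g, classifier, pool):
    """Among candidate graphs classified differently from g, return the one
    at minimum graph edit distance. `classifier` maps a networkx graph to a
    label; `pool` is an iterable of candidate networkx graphs."""
    src_label = classifier(g)
    best, best_cost = None, float("inf")
    for cand in pool:
        if classifier(cand) == src_label:
            continue  # not a counterfactual: same predicted label
        # Exact GED is exponential; the timeout returns the best path found.
        cost = nx.graph_edit_distance(g, cand, timeout=5)
        if cost is not None and cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost
```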


GNNBleed: Inference Attacks to Unveil Private Edges in Graphs with Realistic Access to GNN Models

Song, Zeyu, Kabir, Ehsanul, Mehnaz, Shagufta

arXiv.org Artificial Intelligence

Graph Neural Networks (GNNs) have increasingly become an indispensable tool in learning from graph-structured data, catering to applications such as social network analysis and recommendation systems. At the heart of these networks are the edges, which are crucial in guiding GNN models' predictions. In many scenarios, these edges represent sensitive information, such as personal associations or financial dealings, and thus require privacy assurance. However, their contributions to GNN model predictions may in turn be exploited by the adversary to compromise their privacy. Motivated by these conflicting requirements, this paper investigates edge privacy in contexts where adversaries possess black-box GNN model access, restricted further by access controls that prevent direct insights into arbitrary node outputs. In this context, we introduce a series of privacy attacks grounded in the message-passing mechanism of GNNs. These strategies allow adversaries to deduce connections between two nodes not by directly analyzing the model's output for these pairs but by analyzing the output for nodes linked to them. Our evaluation with seven real-life datasets and four GNN architectures underlines a significant vulnerability: even in systems fortified with access control mechanisms, an adaptive adversary can decipher private connections between nodes, thereby revealing potentially sensitive relationships and compromising the confidentiality of the graph.
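The core signal these attacks exploit can be illustrated simply: under message passing, perturbing one endpoint's features measurably shifts the outputs of nodes a short path away, so an adversary restricted to querying accessible "probe" nodes can still test for a private edge. The sketch below shows that influence measurement only, not the paper's concrete attack algorithms; the model/data interfaces follow common PyTorch Geometric conventions as an assumption.

```python
import torch

def influence_score(model, data, u, probe, noise_scale=1.0):
    """Perturb node u's features and measure the output shift at an
    accessible probe node; a large shift suggests a short message-passing
    path (e.g., a private edge) between u and the probe's neighborhood."""
    model.eval()
    with torch.no_grad():
        base = model(data.x, data.edge_index)[probe]
        x_pert = data.x.clone()
        x_pert[u] += noise_scale * torch.randn_like(x_pert[u])
        pert = model(x_pert, data.edge_index)[probe]
    return (pert - base).norm().item()
```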


Graph Generative Model for Benchmarking Graph Neural Networks

Yoon, Minji, Wu, Yue, Palowitch, John, Perozzi, Bryan, Salakhutdinov, Ruslan

arXiv.org Artificial Intelligence

As the field of Graph Neural Networks (GNN) continues to grow, it experiences a corresponding increase in the need for large, real-world datasets to train and test new GNN models on challenging, realistic problems. Unfortunately, such graph datasets are often generated from online, highly privacy-restricted ecosystems, which makes research and development on these datasets hard, if not impossible. This greatly reduces the amount of benchmark graphs available to researchers, causing the field to rely only on a handful of publicly-available datasets. To address this problem, we introduce a novel graph generative model, Computation Graph Transformer (CGT) that learns and reproduces the distribution of real-world graphs in a privacy-controlled way. More specifically, CGT (1) generates effective benchmark graphs on which GNNs show similar task performance as on the source graphs, (2) scales to process large-scale graphs, (3) incorporates off-the-shelf privacy modules to guarantee end-user privacy of the generated graph. Extensive experiments across a vast body of graph generative models show that only our model can successfully generate privacy-controlled, synthetic substitutes of large-scale real-world graphs that can be effectively used to benchmark GNN models.